# Multilingual Instruction Following
## Qwen3 1.7B GGUF
**Author:** QuantFactory · **License:** Apache-2.0 · **Tags:** Large Language Model · **Downloads:** 333 · **Likes:** 1

Qwen3 is the latest generation of the Tongyi Qianwen series of large language models, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. Built on large-scale training, Qwen3 achieves breakthrough progress in reasoning, instruction following, agent capabilities, and multilingual support.
## Qwen3 235B A22B FP8 Dynamic
**Author:** RedHatAI · **License:** Apache-2.0 · **Tags:** Large Language Model · Transformers · **Downloads:** 2,198 · **Likes:** 2

The FP8-quantized version of Qwen3-235B-A22B, which reduces GPU memory requirements and improves computational throughput, making it suitable for a wide range of natural language processing scenarios.
## Qwen3 14B FP8 Dynamic
**Author:** RedHatAI · **License:** Apache-2.0 · **Tags:** Large Language Model · Transformers · **Downloads:** 167 · **Likes:** 1

Qwen3-14B-FP8-dynamic is an optimized large language model: quantizing activations and weights to the FP8 data type reduces GPU memory requirements and improves computational throughput.
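The FP8-dynamic entries above compute quantization scales from each tensor at runtime rather than calibrating them offline. As a rough illustration of the idea only, here is a minimal NumPy sketch that emulates per-tensor dynamic FP8 (E4M3) rounding in float32; the function name and the emulation (which ignores subnormals and the full exponent range) are illustrative assumptions, not the implementation these models actually use.

```python
import numpy as np

E4M3_MAX = 448.0  # largest finite value representable in FP8 E4M3


def quantize_dequantize_e4m3(x):
    """Emulate dynamic per-tensor FP8 (E4M3) quantize/dequantize in float32."""
    # "Dynamic": the scale is derived from the tensor itself at runtime.
    scale = np.abs(x).max() / E4M3_MAX
    scaled = x / scale
    # Round to 4 significant bits (1 implicit + 3 stored mantissa bits):
    # decompose into mantissa * 2**exp, snap the mantissa to steps of 1/16,
    # then recompose. Subnormals and exponent clamping are ignored here.
    m, e = np.frexp(scaled)            # m in [0.5, 1) (or 0), so 4 bits = m * 16
    m = np.round(m * 16) / 16
    q = np.clip(np.ldexp(m, e), -E4M3_MAX, E4M3_MAX)
    return q * scale, scale
```

Because only 4 significant bits survive, each element is rounded within 1/16 of its own magnitude, which is the precision/memory trade-off the card descriptions refer to.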
## Qwen3 32B GGUF
**Author:** Qwen · **License:** Apache-2.0 · **Tags:** Large Language Model · **Downloads:** 22.65k · **Likes:** 35

Qwen3 is the latest version of the Tongyi Qianwen series of large language models, offering a range of dense and mixture-of-experts (MoE) models with groundbreaking advancements in reasoning, instruction following, agent capabilities, and multilingual support.
## Qwen3 8B GGUF
**Author:** Mungert · **License:** Apache-2.0 · **Tags:** Large Language Model · Transformers · **Downloads:** 1,931 · **Likes:** 7

Qwen3 is the latest generation of large language models in the Tongyi Qianwen series, offering a complete suite of dense and mixture-of-experts (MoE) models. Built on large-scale training, Qwen3 achieves breakthrough progress in reasoning, instruction following, agent capabilities, and multilingual support.
## Qwen3 0.6B FP8
**Author:** Qwen · **License:** Apache-2.0 · **Tags:** Large Language Model · Transformers · **Downloads:** 5,576 · **Likes:** 43

Qwen3-0.6B-FP8 is the FP8-quantized 0.6B-parameter release in the latest Tongyi Qianwen series of large language models, supporting seamless switching between thinking and non-thinking modes as well as multilingual tasks.
## Granite 3.2 2b Instruct GGUF
**Author:** ibm-research · **License:** Apache-2.0 · **Tags:** Large Language Model · **Downloads:** 1,476 · **Likes:** 7

Granite-3.2-2B-Instruct is a 2-billion-parameter long-context model fine-tuned for reasoning capabilities, supporting 12 languages and a wide range of tasks.
## Granite 3.2 8b Instruct GGUF
**Author:** ibm-research · **License:** Apache-2.0 · **Tags:** Large Language Model · Transformers · **Downloads:** 1,059 · **Likes:** 5

Granite-3.2-8B-Instruct is an 8-billion-parameter long-context model fine-tuned for reasoning capabilities, supporting multiple languages and tasks.
## Phi 3 Medium 4k Instruct Abliterated V3 GGUF
**Author:** failspy · **License:** MIT · **Tags:** Large Language Model · Other · **Downloads:** 85 · **Likes:** 26

An orthogonalized ("abliterated") version of microsoft/Phi-3-medium-4k-instruct in which the model's tendency to refuse requests is suppressed while preserving as much of the original model's knowledge and capabilities as possible.
## Phi 3 Medium 4k Instruct
**Author:** microsoft · **License:** MIT · **Tags:** Large Language Model · Transformers · Other · **Downloads:** 43.60k · **Likes:** 219

Phi-3-Medium-4K-Instruct is a 14-billion-parameter lightweight open model focused on high-quality reasoning, with a 4K context length, suitable for commercial and research use in English.
## Mixtral 8x7B Instruct V0.1 HF
**Author:** LoneStriker · **License:** Apache-2.0 · **Tags:** Large Language Model · Transformers · Supports Multiple Languages · **Downloads:** 45 · **Likes:** 4

Mixtral-8x7B is a pretrained generative sparse mixture-of-experts large language model that outperforms Llama 2 70B on most benchmarks.
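The Mixtral and Qwen3 MoE entries above rely on sparse mixture-of-experts routing: a small gating network selects the top-k experts per token, so only a fraction of the parameters is active for any given input. A minimal NumPy sketch of top-k routing, where the function name, shapes, and loop structure are illustrative assumptions rather than Mixtral's actual implementation:

```python
import numpy as np


def moe_forward(x, gate_w, expert_ws, top_k=2):
    """Sparse MoE layer: each token is processed by its top-k experts only.

    x:         (tokens, d_in)          input activations
    gate_w:    (d_in, n_experts)       router (gating) weights
    expert_ws: (n_experts, d_in, d_out) one weight matrix per expert
    """
    logits = x @ gate_w                              # router scores (tokens, n_experts)
    topk = np.argsort(logits, axis=1)[:, -top_k:]    # indices of each token's top-k experts
    out = np.zeros((x.shape[0], expert_ws.shape[2]))
    for t in range(x.shape[0]):
        sel = topk[t]
        # softmax over only the selected experts' logits gives mixing weights
        w = np.exp(logits[t, sel] - logits[t, sel].max())
        w /= w.sum()
        for weight, e in zip(w, sel):
            out[t] += weight * (x[t] @ expert_ws[e])  # run just the chosen experts
    return out
```

This is why an "8x7B" model can hold roughly 47B parameters yet compute with far fewer per token: with top_k=2, only two expert matrices participate in each token's forward pass.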
## Guanaco 7b Leh V2
**Author:** KBlueLeaf · **License:** GPL-3.0 · **Tags:** Large Language Model · Transformers · Supports Multiple Languages · **Downloads:** 474 · **Likes:** 37

A multilingual instruction-following language model based on LLaMA 7B, supporting English, Chinese, and Japanese, suitable for chatbots and instruction-following tasks.